The Fine-Grained Complexity of Problems Expressible by First-Order Logic and Its Extensions
This dissertation studies the fine-grained complexity of model-checking problems for fixed logical formulas on sparse input structures. The Orthogonal Vectors problem is an important and well-studied problem in fine-grained complexity: its hardness follows from the Strong Exponential Time Hypothesis (SETH), and its hardness in turn implies the hardness of many other interesting problems. We show that the Orthogonal Vectors problem is complete for the class of first-order model-checking problems on sparse structures under fine-grained reductions; in other words, the hardness of Orthogonal Vectors and the hardness of first-order model checking imply each other. This also yields an improved algorithm for first-order model-checking problems. Among all first-order formulas in prenex normal form, we have reason to believe that certain quantifier structures may be the hardest computationally: if the nondeterministic version of SETH is true, formulas of these forms are the only ones that remain hard under SETH. We can add extensions to first-order logic to strengthen its expressive power. This work also studies the fine-grained complexity of first-order formulas with comparison on structures with a total order, first-order formulas with transitive closure operations, first-order formulas of fixed quantifier rank, and first-order formulas of fixed variable complexity. We also introduce a technique for reducing sequential problems on graphs to parallel problems on sets, which can be applied to extend the Least Weight Subsequence problem from linear structures to some special classes of graphs.
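For readers unfamiliar with it, the Orthogonal Vectors problem asks whether two sets of n Boolean vectors in d dimensions contain a pair with inner product zero. The Python sketch below (illustrative only; the function name and brute-force approach are ours, not the dissertation's) shows the naive O(n^2 * d) algorithm whose conjectured near-optimality underlies the hardness results discussed above.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Return True if some a in A and b in B have inner product zero.

    A and B are lists of equal-length 0/1 vectors.  This brute force takes
    O(n^2 * d) time; under SETH, no n^(2 - eps) * poly(d) algorithm exists.
    """
    return any(sum(x * y for x, y in zip(a, b)) == 0 for a, b in product(A, B))

# (1, 0, 1) and (0, 1, 0) are orthogonal, so this prints True.
print(has_orthogonal_pair([(1, 0, 1), (1, 1, 0)], [(0, 1, 0), (1, 1, 1)]))
```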
A deep learning framework for multi-scale models based on physics-informed neural networks
Physics-informed neural networks (PINNs) combine deep neural networks with the solution of partial differential equations (PDEs), creating a new and promising research area for numerically solving PDEs. For a class of multi-scale problems whose loss function contains terms of widely different orders of magnitude, it is challenging for standard PINN methods to obtain a usable prediction. In this paper, we propose a new framework for solving multi-scale problems by reconstructing the loss function. The framework is based on the standard PINN method and modifies its loss function by applying different numbers of power operations to the loss terms of different magnitudes, so that the individual terms composing the loss function have approximately the same order of magnitude. In addition, we give a grouping regularization strategy that deals well with problems whose solutions vary significantly across different subdomains. The proposed method enables loss terms with different magnitudes to be optimized simultaneously, and it advances the application of PINNs to multi-scale problems.
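As a rough illustration of the loss-reconstruction idea, the sketch below repeatedly applies square roots to each loss term until all terms land in a comparable range, so that no single term dominates the gradient. This is only one hedged reading of "applying different numbers of power operations": the target range, the choice of square roots, and the helper names in the usage comments are assumptions, not the paper's exact scheme.

```python
import torch

def rebalance_loss(loss_terms, low=1e-1, high=1e1, max_ops=10):
    """Apply a (possibly different) number of power operations -- here repeated
    square roots -- to each non-negative loss term until every term has roughly
    the same order of magnitude, then sum the rebalanced terms."""
    total = torch.zeros(())
    for term in loss_terms:
        t = term
        for _ in range(max_ops):
            value = t.detach().item()
            if value <= 0.0 or low <= value <= high:
                break
            t = torch.sqrt(t)  # sqrt pulls any positive value toward 1
        total = total + t
    return total

# Hypothetical usage: a PDE residual near 1e-6 and a boundary term near 1e+2
# would otherwise vanish/dominate relative to each other in the combined loss.
# pde_loss = mse_residual(model, collocation_points)   # hypothetical helpers
# bc_loss  = mse_boundary(model, boundary_points)
# loss = rebalance_loss([pde_loss, bc_loss])
# loss.backward()
```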
PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization
Few-shot abstractive summarization has become a challenging task in natural language generation. To support it, we designed a novel soft-prompt architecture coupled with a prompt pre-training plus fine-tuning paradigm that is effective and tunes only an extremely small number of parameters. The soft prompts include continuous input embeddings across an encoder and a decoder to fit the structure of the generation models. Importantly, a novel inner-prompt placed in the text is introduced to capture document-level information; the aim is to direct attention toward understanding the document so that the model is better prompted to generate document-related content. The first step in the summarization procedure is prompt pre-training with self-supervised pseudo-data, which teaches the model basic summarizing capabilities. The model is then fine-tuned with few-shot examples. Experimental results on the CNN/DailyMail and XSum datasets show that our method, with only 0.1% of the parameters, outperforms full-model tuning, where all model parameters are tuned. It also surpasses Prompt Tuning by a large margin and delivers competitive results against Prefix-Tuning with 3% of the parameters.
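To make the soft-prompt mechanism concrete, here is a minimal sketch of learnable prompt vectors prepended to a frozen model's token embeddings. The class name, prompt length, and initialization are our assumptions for illustration; the PSP paper's decoder prompts and the inner-prompts placed inside the text are not reproduced here.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable continuous prompt vectors prepended to token embeddings."""

    def __init__(self, prompt_length: int, hidden_size: int):
        super().__init__()
        # Small random init; the paper may instead initialize from vocabulary embeddings.
        self.prompt = nn.Parameter(0.02 * torch.randn(prompt_length, hidden_size))

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden) from the frozen embedding layer
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

# Only the prompt parameters are trained; the base model stays frozen:
# for p in base_model.parameters():
#     p.requires_grad_(False)
# optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```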